Aggregating Caches: A Mechanism for Implicit File Prefetching

Authors

  • Ahmed Amer
  • Darrell D. E. Long
Abstract

We introduce the aggregating cache, and demonstrate how it can be used to reduce the number of file retrieval requests made by a caching client, improving storage system performance by reducing the impact of latency. The aggregating cache utilizes predetermined groupings of files to perform group retrievals. These groups are maintained by the server, and built dynamically using observed inter-file relationships. Through a simple analytical model we demonstrate how this mechanism has the potential to reduce average latencies by % to %. Through trace-based simulation we demonstrate that a simple aggregating cache can reduce the number of demand fetches by almost %, while simultaneously improving cache hit ratios by up to %.
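The mechanism described above can be sketched as a caching client that, on a miss, issues one retrieval for the entire server-defined group containing the requested file. The sketch below is purely illustrative: the class name, the `fetch` callback, and the static group map stand in for the server's dynamically built groupings, which the paper derives from observed inter-file relationships.

```python
from collections import OrderedDict

class AggregatingCacheClient:
    """Illustrative aggregating-cache client: a miss triggers one demand
    fetch for the whole group, so later accesses to related files hit
    locally. Names and the group map are hypothetical."""

    def __init__(self, capacity, server_groups, fetch):
        self.capacity = capacity              # max files held in the cache
        self.server_groups = server_groups    # file -> tuple of grouped files
        self.fetch = fetch                    # fetch(files) -> {name: data}
        self.cache = OrderedDict()            # LRU order: oldest first
        self.demand_fetches = 0

    def read(self, name):
        if name in self.cache:
            self.cache.move_to_end(name)      # LRU hit: refresh recency
            return self.cache[name]
        # Miss: a single demand fetch retrieves the whole group.
        self.demand_fetches += 1
        group = self.server_groups.get(name, (name,))
        for fname, data in self.fetch(group).items():
            self.cache[fname] = data
            self.cache.move_to_end(fname)
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return self.cache[name]
```

With a two-file group, a read of the first member also caches the second, so the follow-up read hits without a second round trip to the server.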


Related resources

Avoiding the Cache-Coherence Problem in a Parallel/Distributed File System

In this paper we present PAFS, a new parallel/distributed file system. Within the whole file system, special interest is placed on the caching and prefetching mechanisms. We present a cooperative cache that avoids the coherence problem while it continues to be highly scalable and achieves very good performance. We also present an aggressive prefetching algorithm that allows full utilization of the ...


The Split replacement policy for caches with prefetch blocks

Prefetching is an inbuilt feature of file system and storage caches. The cache replacement policy plays a key role in the performance of prefetching techniques, since a miss occurs if a prefetch block is evicted before the arrival of the on-demand user request for the block. Prefetch blocks display spatial locality, but existing cache replacement policies are designed for blocks that display te...
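One common way to separate prefetch blocks from demand blocks, as the blurb above motivates, is to keep them in distinct partitions with different replacement orders. The sketch below is a hedged illustration of that idea, not the paper's exact Split policy: prefetched blocks sit in a FIFO until their first reference, at which point they are promoted to a demand-side LRU; the fixed partition sizes are an assumption made for brevity.

```python
from collections import OrderedDict

class SplitCache:
    """Illustrative split replacement: demand blocks in an LRU list,
    prefetched blocks in a separate FIFO until first use, when they are
    promoted to the demand side. Partition sizing is hypothetical."""

    def __init__(self, demand_cap, prefetch_cap):
        self.demand = OrderedDict()
        self.prefetch = OrderedDict()
        self.demand_cap = demand_cap
        self.prefetch_cap = prefetch_cap

    def insert_prefetch(self, block, data):
        self.prefetch[block] = data
        if len(self.prefetch) > self.prefetch_cap:
            self.prefetch.popitem(last=False)   # FIFO: evict oldest prefetch

    def access(self, block):
        if block in self.demand:
            self.demand.move_to_end(block)      # LRU: refresh recency
            return self.demand[block]
        if block in self.prefetch:
            data = self.prefetch.pop(block)     # promote on first reference
            self._insert_demand(block, data)
            return data
        return None                             # miss

    def _insert_demand(self, block, data):
        self.demand[block] = data
        self.demand.move_to_end(block)
        if len(self.demand) > self.demand_cap:
            self.demand.popitem(last=False)     # evict least recently used
```

Keeping the partitions separate means a burst of prefetches can only displace other unused prefetches, never recently used demand blocks, which is the failure mode the blurb describes for a single unified LRU.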


Stride-directed Prefetching for Secondary Caches

This paper studies hardware prefetching for second-level (L2) caches. Previous work on prefetching has been extensive but largely directed at primary caches. In some cases only L2 prefetching is possible or is more appropriate. By studying L2 prefetching characteristics we show that existing stride-directed methods [1, 8] for L1 caches do not work as well in L2 caches. We prop...
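The stride-directed methods this blurb refers to can be sketched with a reference prediction table keyed by the issuing instruction's PC: the table remembers the last address and stride, and predicts the next address once the stride repeats. The two-observation confirmation below is an illustrative choice, not the scheme from the paper or from references [1, 8].

```python
class StridePrefetcher:
    """Minimal stride-directed prefetcher sketch: per-PC table of
    (last address, stride, confirmed). A prediction is issued only
    after the same nonzero stride is observed twice in a row."""

    def __init__(self):
        self.table = {}  # pc -> (last_addr, stride, confirmed)

    def access(self, pc, addr):
        entry = self.table.get(pc)
        prefetch = None
        if entry is None:
            self.table[pc] = (addr, 0, False)        # first sighting
        else:
            last, stride, _ = entry
            new_stride = addr - last
            if new_stride == stride and stride != 0:
                prefetch = addr + stride             # stride confirmed
                self.table[pc] = (addr, stride, True)
            else:
                self.table[pc] = (addr, new_stride, False)
        return prefetch  # address to prefetch, or None
```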


PACA: A Cooperative File System Cache for Parallel Machines

A new cooperative caching mechanism, PACA, along with a caching algorithm, LRU-Interleaved, and an aggressive prefetching algorithm, Full-File-On-Open, are presented. The caching algorithm is especially targeted to parallel machines running a microkernel-based operating system. It avoids the cache coherence problem with no loss in performance. Comparing our algorithm with N-Chance Forwarding, ...


Reducing File System Latency using a Predictive Approach

Despite impressive advances in file system throughput resulting from technologies such as high-bandwidth networks and disk arrays, file system latency has not improved and in many cases has become worse. Consequently, file system I/O remains one of the major bottlenecks to operating system performance [10]. This paper investigates an automated predictive approach towards reducing file latency. ...
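One simple predictive approach of the kind this blurb describes is a last-successor model: remember which file followed each file on the previous occasion, and prefetch that successor on the next access. The sketch below is illustrative only and is not claimed to be the paper's predictor.

```python
class LastSuccessorPredictor:
    """Illustrative last-successor model: on each file access, return the
    file that followed it last time (a prefetch candidate) and record the
    current file as the successor of the previous one."""

    def __init__(self):
        self.successor = {}   # file -> file that followed it last time
        self.prev = None      # most recently accessed file

    def access(self, name):
        prediction = self.successor.get(name)
        if self.prev is not None:
            self.successor[self.prev] = name   # update observed successor
        self.prev = name
        return prediction    # file to prefetch, or None if unknown
```

Because the predictor piggybacks on the existing access stream, it imposes no user-visible cost: a wrong prediction wastes one speculative fetch, while a correct one hides the latency of the next request.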



Journal:

Volume   Issue

Pages  -

Publication date: 2001